Lost in the Middle: How Language Models Use Long Contexts
https://arxiv.org/abs/2307.03172
Evaluated on two tasks: multi-document question answering and key-value retrieval.
From the abstract: performance can degrade significantly when the position of relevant information changes. Performance is often highest when relevant information occurs at the beginning or end of the input context, and degrades significantly when models must access relevant information in the middle of long contexts, even for explicitly long-context models.
U-shaped performance curve (Figure 1)
Figure 5
According to the research group (including Stanford), placing "important information" at the "beginning or end" of the prompt may allow a large language model to use it more effectively.
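The key-value retrieval task above can be sketched as follows: build a JSON object of random UUID key-value pairs and ask the model to retrieve the value for one key, varying where that target pair sits in the context. This is an illustrative reconstruction under stated assumptions (the function name and prompt wording are mine, not the authors' code).

```python
import json
import random
import uuid

def make_kv_prompt(num_pairs: int, target_index: int, seed: int = 0):
    """Build a synthetic key-value retrieval prompt in the spirit of the
    paper's task. The pair the model must retrieve is placed at
    `target_index`, so moving it toward the middle of the JSON probes
    the 'lost in the middle' effect. Hypothetical sketch, not the
    authors' exact implementation."""
    rng = random.Random(seed)
    # Random UUID string pairs; dicts preserve insertion order in Python 3.7+,
    # so target_index controls the pair's position in the serialized context.
    pairs = [
        (str(uuid.UUID(int=rng.getrandbits(128))),
         str(uuid.UUID(int=rng.getrandbits(128))))
        for _ in range(num_pairs)
    ]
    target_key, target_value = pairs[target_index]
    data = json.dumps(dict(pairs), indent=1)
    prompt = (
        "Extract the value corresponding to the specified key "
        "from the JSON object below.\n\n"
        f"{data}\n\n"
        f'Key: "{target_key}"\nCorresponding value:'
    )
    return prompt, target_value
```

Sweeping `target_index` from 0 to `num_pairs - 1` and scoring whether the model's answer matches `target_value` would reproduce the position-vs-accuracy curve described above.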